5 - Diagnostic Medical Image Processing (DMIP) 2010/11

Okay, so let's start. Welcome everybody. We are currently in the chapter on image undistortion,

how to compute the undistortion mapping in an image intensifier based system. And basically

we have started out with a very practical problem. We have the image intensifier with

the electron optics and this amplification system that shows some interaction with the magnetic field

and the result is a distorted image. Then we mapped everything to a nice mathematical problem

and now we are massaging the math that is involved, and along the way we can develop a lot of theory that is very important for the design of image processing algorithms for many, many applications. So these concepts that

we are currently discussing, they are not limited to the undistortion problem. The undistortion

problem is basically the practical motivation for all the concepts we are currently discussing.

And on Mondays we have only 45 minutes, so I will not go into the big picture today. I will do this

tomorrow morning. Now I will just briefly remind you what we discussed last time and how we started

out to discuss the bootstrapping idea. So basically what we have seen last time is that the undistortion problem can be transformed into a linear least-squares problem, and we have seen that

we can compute a measurement matrix that basically consists of the points, the corresponding points

and powers and products of the corresponding points. And this matrix M is applied to the

parameters x of the polynomial and we end up with an observed vector B. And due to the fact that we

have noisy data, that we have data that was acquired by a sensor, we can be sure that the

identity Mx = B does not hold exactly, or in other words we can say with probability one that the vector B is not in the range of M. What does that mean? The vector B is not in the range of M.
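To make the setup concrete, here is a minimal sketch, in Python, of how such a measurement matrix could be assembled for a degree-d polynomial model. The function name, the monomial ordering, and the synthetic points are illustrative assumptions, not the lecture's notation:

```python
import numpy as np

def measurement_matrix(u, v, degree=2):
    """Stack the monomials u^i * v^j (with i + j <= degree), evaluated
    at the distorted points (u, v), as the columns of M."""
    cols = [u**i * v**j
            for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    return np.column_stack(cols)

# made-up corresponding points (in a real system these come from
# detected grid markers in the distorted intensifier image)
rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, size=50)
v = rng.uniform(-1.0, 1.0, size=50)

M = measurement_matrix(u, v, degree=2)   # 50 points, 6 monomials
print(M.shape)                           # (50, 6)
```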

I mean for us it's more and more important not just to remember the methods of the first year

or second year of your studies here. For us it's important to understand the intrinsics and to come

up with a geometric interpretation of what is actually written here. And this matrix M can also be

understood as a vector of vectors: you can consider the column vector m1, the column vector m2, and so on. So you can consider your matrix as a sequence of column vectors multiplied with x, and x is x1, x2 up to xn. And this is basically nothing else but the sum over i of xi times mi. So what we are considering is nothing else but a linear combination of the column vectors of the matrix M. And this linear combination of the column vectors of M is required to equal the vector B.
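Written out, the product Mx is exactly that linear combination of columns (a standard identity, stated here in the transcript's notation):

$$
Mx \;=\; \begin{pmatrix} m_1 & m_2 & \cdots & m_n \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
\;=\; \sum_{i=1}^{n} x_i \, m_i \;\stackrel{!}{=}\; b
$$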

And if B is not in the range of the matrix, that basically means there are no coefficients xi such that the combination of these column vectors yields the vector B. So the range of a matrix is nothing else but

the set of all the vectors that can be reached by a linear combination of the column vectors of the

matrix. So if B is not in the range of the matrix M, what can we do instead? We can say we look for an x such that the residual ||Mx - B|| is minimized. We look for a vector x such that the difference

vector of Mx and B has minimum Euclidean length. That's how we solve this problem.

Compute x such that the linear combination of the column vectors weighted by xi is as close as

possible to B in terms of the Euclidean distance. Good. And we know how to solve that. Martin?

Just three letters. SVD. Of course we can solve this with SVD. It's kind of obvious this time.

Pardon me? It's kind of obvious this time. It's always obvious. All the answers are obvious.

From my point of view all the answers are obvious. So B is this... sorry, x is this: x = (M^T M)^{-1} M^T B. What am I doing here?
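Here is a minimal, self-contained sketch of both routes on synthetic data; the matrix M and the vector b are made up, and only standard NumPy calls are used:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(50, 6))        # synthetic measurement matrix
b = rng.normal(size=50)             # synthetic, noisy observations

# least-squares solution via the SVD: M = U diag(s) V^T,
# so x = V diag(1/s) U^T b (the pseudo-inverse applied to b)
U, s, Vt = np.linalg.svd(M, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / s)

# the same solution via the normal equations (M^T M)^{-1} M^T b;
# numerically less stable when M is badly conditioned
x_ne = np.linalg.solve(M.T @ M, M.T @ b)

print(np.allclose(x_svd, x_ne))     # True up to rounding
```

The SVD route also makes the conditioning visible: the condition number of M is the ratio of its largest to its smallest singular value, which leads directly to the scaling discussion that follows.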

And last time we discussed an important thing. We discussed

how well conditioned this problem is. And we talked about a proper scaling of the measurements

such that we can solve this reliably. And we said we compute the scaling of the x and y dimensions such that the condition number of this matrix is minimum or maximum? Minimal. Such that this condition number is minimal. So that's a crucial thing.
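A minimal numerical sketch of that effect, with made-up pixel coordinates and an assumed degree-2 monomial basis; the centering and per-axis scale factors below are one common normalization choice, not necessarily the lecture's:

```python
import numpy as np

def mmat(u, v):
    # degree-2 monomial columns: 1, u, v, uv, u^2, v^2
    return np.column_stack([np.ones_like(u), u, v, u*v, u**2, v**2])

rng = np.random.default_rng(1)
u = rng.uniform(0.0, 512.0, size=100)   # raw pixel coordinates
v = rng.uniform(0.0, 512.0, size=100)

# center each axis and rescale to unit spread before building M
su, sv = 1.0 / np.std(u), 1.0 / np.std(v)
M_raw = mmat(u, v)
M_scaled = mmat((u - u.mean()) * su, (v - v.mean()) * sv)

# the scaled matrix is far better conditioned
print(np.linalg.cond(M_raw), np.linalg.cond(M_scaled))
```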

And then we talked about fair parameterization. That means we looked for parameterizations that

are basically independent of the orientation and position of the coordinate system. And

I was asking the question last time: how can we compute the variance of the estimated parameters?
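The lecture defers the answer; for reference, the standard least-squares result, valid under the assumption of zero-mean, independent Gaussian noise with variance $\sigma^2$ on the entries of B, is

$$
\operatorname{Cov}(\hat{x}) \;=\; \sigma^2 \left( M^\top M \right)^{-1}.
$$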

And this is the picture you have to remember for the oral exam. This is the picture to know, the picture you should remember.
